
    An Interpretable Multiple-Instance Approach for the Detection of Referable Diabetic Retinopathy from Fundus Images

    Diabetic Retinopathy (DR) is a leading cause of vision loss globally. Yet despite its prevalence, the majority of affected people lack access to the specialized ophthalmologists and equipment required for assessing their condition. This can lead to delays in the start of treatment, lowering their chances of a successful outcome. Machine learning systems that automatically detect the disease in eye fundus images have been proposed as a means of facilitating access to DR severity estimates for patients in remote regions, or even of complementing the human expert's diagnosis. In this paper, we propose a machine learning system for the detection of referable DR in fundus images that is based on the paradigm of multiple-instance learning. By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy. Moreover, it can highlight potential image regions where DR manifests through its characteristic lesions. We evaluate our approach on publicly available retinal image datasets, on which it exhibits near state-of-the-art performance, while also producing interpretable visualizations of its predictions. Comment: 11 pages.
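    The attention pooling described in the abstract can be sketched as follows. This is a generic NumPy illustration of one common formulation of attention-based multiple-instance pooling, not the paper's actual model; `H`, `V`, `w` and all dimensions are hypothetical stand-ins:

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling over a bag of instance embeddings.

    H: (K, D) embeddings of the K patches (instances) of one image (bag).
    a_k = softmax_k(w^T tanh(V h_k));  z = sum_k a_k h_k.
    The weights a_k double as a patch-importance map for visualization.
    """
    scores = np.tanh(H @ V.T) @ w          # (K,) unnormalized attention
    e = np.exp(scores - scores.max())      # stable softmax
    a = e / e.sum()                        # attention weights, sum to 1
    return a @ H, a                        # bag embedding (D,), weights (K,)

rng = np.random.default_rng(0)
K, D, L = 6, 16, 8                         # patches, embed dim, attn dim
H = rng.normal(size=(K, D))                # stand-in for CNN patch features
z, a = attention_mil_pool(H, rng.normal(size=(L, D)), rng.normal(size=L))
```

    A bag-level classifier head would then operate on `z`, while `a` can be upsampled back onto the fundus image to highlight candidate lesion regions.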

    Listen2YourHeart: A Self-Supervised Approach for Detecting Murmur in Heart-Beat Sounds

    Heart murmurs are abnormal sounds present in heartbeats, caused by turbulent blood flow through the heart. The PhysioNet 2022 challenge targets automatic detection of murmur from audio recordings of the heart and automatic detection of normal vs. abnormal clinical outcome. The recordings are captured from multiple locations around the heart. Our participation investigates the effectiveness of self-supervised learning for murmur detection. We train the layers of a backbone CNN in a self-supervised way with data from both this year's and the 2016 challenge. We use two different augmentations on each training sample, and a normalized temperature-scaled cross-entropy loss. We experiment with different augmentations to learn effective phonocardiogram representations. To build the final detectors we train two classification heads, one for each challenge task. We present evaluation results for all combinations of the available augmentations, and for our multiple-augmentation approach. The SSL murmur detection classifier of our team, Listen2YourHeart, received a weighted accuracy score of 0.737 (ranked 13th out of 40 teams) and an outcome identification challenge cost score of 11946 (ranked 7th out of 39 teams) on the hidden test set. Comment: To be published in the proceedings of CinC 2022 (https://cinc.org/). This is a preprint version of the final paper.
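    The normalized temperature-scaled cross-entropy objective mentioned above can be sketched in NumPy. This is a minimal SimCLR-style NT-Xent loss under stated assumptions, not the authors' implementation; batch size, dimensions, and the temperature value are illustrative:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N
    recordings; (z1[i], z2[i]) is the positive pair, and the remaining
    2N - 2 embeddings in the batch act as negatives.
    """
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # drop self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))

# Orthogonal toy embeddings where both views agree exactly:
loss = nt_xent(np.eye(4), np.eye(4))
print(round(loss, 3))   # ≈ 0.595: -log(e^2 / (e^2 + 6)) with tau = 0.5
```

    Minimizing this loss pulls the two augmented views of each recording together while pushing apart all other recordings in the batch, which is what lets the backbone CNN learn phonocardiogram representations without murmur labels.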

    A Learnable Model with Calibrated Uncertainty Quantification for Estimating Canopy Height from Spaceborne Sequential Imagery

    This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License. For more information, see https://creativecommons.org/licenses/by-nc-nd/4.0/. Global-scale canopy height mapping is an important tool for ecosystem monitoring and sustainable forest management. Various studies have demonstrated the ability to estimate canopy height from a single spaceborne multispectral image using end-to-end learning techniques. In addition to the texture information of a single-shot image, our study exploits the multitemporal information of image sequences to improve estimation accuracy. We adopt a convolutional variant of a long short-term memory (LSTM) model for canopy height estimation from multitemporal instances of Sentinel-2 products. Furthermore, we utilize the deep ensembles technique for meaningful uncertainty estimation on the predictions and a post-processing isotonic regression model for calibrating them. Our lightweight model (∼320k trainable parameters) achieves a mean absolute error (MAE) of 1.29 m in a European test area of 79 km². It outperforms state-of-the-art methods based on single-shot spaceborne images as well as costly airborne images, while providing additional confidence maps that are shown to be well calibrated. Moreover, the trained model is shown to be transferable to a different European country using a fine-tuning area as small as ∼2 km², with MAE = 1.94 m.
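    The isotonic recalibration step can be illustrated with a pool-adjacent-violators (PAVA) fit. This is a generic sketch, not the authors' pipeline; `sigma` and `abs_err` are hypothetical validation-set arrays:

```python
import numpy as np

def pava(y):
    """Pool Adjacent Violators: least-squares non-decreasing fit to y."""
    blocks = []                                  # [block mean, block size]
    for v in y:
        blocks.append([float(v), 1])
        # merge adjacent blocks while monotonicity is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, n2 = blocks.pop()
            m1, n1 = blocks.pop()
            blocks.append([(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2])
    return np.concatenate([[m] * n for m, n in blocks])

# Map predicted std to observed |error| on a validation set, constrained
# to be monotone in the predicted uncertainty:
sigma = np.array([0.2, 0.5, 0.4, 1.0, 0.9])      # predicted std per pixel (m)
abs_err = np.array([0.3, 0.6, 0.7, 1.1, 1.0])    # observed |error| (m)
order = np.argsort(sigma)
calibrated = np.empty_like(abs_err)
calibrated[order] = pava(abs_err[order])         # recalibrated uncertainties
```

    The fitted monotone map can then be applied (e.g. by interpolation) to the ensemble's predicted standard deviations at test time, so that larger predicted uncertainty reliably corresponds to larger expected error.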

    VITALAS at TRECVID-2008

    In this paper, we present our experiments in the TRECVID 2008 high-level feature extraction task. This is the first year of our participation in TRECVID, and our system adopts several popular approaches proposed by other groups in previous years. We propose two enhanced low-level features: a new Gabor texture descriptor and a Compact-SIFT codeword histogram. Our system uses the well-known LIBSVM package to train the SVM base classifiers. In the fusion step, several methods are employed, including voting, SVM-based fusion, HCRF, and Bootstrap Average AdaBoost (BAAB).
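    The fusion of per-classifier scores can be sketched with a minimal late-fusion example. This is a generic illustration of score voting and averaging with hypothetical scores, not the VITALAS system itself:

```python
import numpy as np

def late_fusion(scores, method="vote", thresh=0.5):
    """Fuse per-classifier concept scores.

    scores: (C, N) array -- C base classifiers, N shots/keyframes.
    'vote': majority vote over thresholded decisions;
    'avg' : mean confidence across classifiers.
    """
    scores = np.asarray(scores, dtype=float)
    if method == "avg":
        return scores.mean(axis=0)
    votes = (scores >= thresh).sum(axis=0)
    return (votes > scores.shape[0] / 2).astype(float)

s = [[0.9, 0.2], [0.8, 0.4], [0.1, 0.6]]   # 3 classifiers, 2 shots
print(late_fusion(s))                      # [1. 0.] -- majority vote
print(late_fusion(s, "avg"))               # averaged confidences per shot
```

    Averaging preserves a ranked confidence per shot (useful for TRECVID-style ranked retrieval), whereas voting yields hard decisions.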

    Machine learning for identifying emergent and floating aquatic vegetation from space: a case study in the Dniester Delta, Ukraine

    Monitoring aquatic vegetation, including both floating and emergent types, plays a crucial role in understanding the dynamics of freshwater ecosystems. Our research focused on the Lower Dniester Basin in Southern Ukraine, covering approximately 1800 square kilometers of steppe plains and wetlands. We applied traditional machine learning algorithms, specifically random forest and boosted trees, to analyze Sentinel-2 satellite imagery for segmenting aquatic vegetation into emergent and floating types. Our methodology was validated against detailed in-situ field measurements collected annually over a 5-year study period. The machine learning classifiers achieved an F1-score of 0.88 ± 0.03 in classifying floating vegetation, outperforming our previously suggested histogram-based thresholding methodology for the same task. While emergent vegetation and open water were easily identifiable from satellite imagery, our methodology also proved robust and temporally transferable in accurately delineating floating vegetation. Additionally, we explored the significance of various features through the Minimum Redundancy Maximum Relevance (mRMR) algorithm. This study highlights advancements in aquatic vegetation mapping and demonstrates a valuable tool for ecological monitoring and future research.
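    The feature-ranking step can be sketched with a simple greedy mRMR variant. This is an illustrative correlation-based version on synthetic data, not the study's actual implementation (mRMR is often defined with mutual information instead); all variable names are hypothetical:

```python
import numpy as np

def mrmr(X, y, k):
    """Greedy mRMR: repeatedly add the feature with the highest
    relevance (|corr with y|) minus mean redundancy (mean |corr| with
    the features already selected)."""
    d = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(d)])
    selected = [int(np.argmax(rel))]
    while len(selected) < k:
        scores = {}
        for j in set(range(d)) - set(selected):
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            scores[j] = rel[j] - red
        selected.append(max(scores, key=scores.get))
    return selected

# Toy data: two near-duplicate copies of a strong predictor, one weaker
# independent predictor, and pure noise.
rng = np.random.default_rng(1)
f_a, f_b, noise = rng.normal(size=(3, 500))
y = 2 * f_a + f_b
X = np.column_stack([f_a, f_a + 0.01 * rng.normal(size=500), f_b, noise])
selected_feats = mrmr(X, y, k=2)
print(selected_feats)   # one f_a copy, then f_b -- not the redundant twin
```

    The redundancy penalty is what keeps the near-duplicate spectral band out of the selection even though it is almost as relevant as the first pick.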